Search Results: "Francois Marier"

21 July 2014

Francois Marier: Creating a modern tiling desktop environment using i3

Modern desktop environments like GNOME and KDE involve a lot of mousing around and I much prefer using the keyboard where I can. This is why I switched to the Ion tiling window manager back when I interned at Net Integration Technologies and kept using it until I noticed it had been removed from Debian. After experimenting with awesome for 2 years and briefly considering xmonad, I finally found a replacement I like in i3. Here is how I customized it and made it play nice with the GNOME and KDE applications I use every day.

Startup script As soon as I log into my desktop, my startup script starts a few programs. Because of a bug in gnome-settings-daemon which makes the mouse cursor disappear as soon as the daemon starts, I had to run the following to disable the offending gnome-settings-daemon plugin:
dconf write /org/gnome/settings-daemon/plugins/cursor/active false

Screensaver In addition, gnome-screensaver didn't automatically lock my screen, so I installed xautolock and added it to my startup script:
xautolock -time 30 -locker "gnome-screensaver-command --lock" &
to lock the screen using gnome-screensaver after 30 minutes of inactivity. I can also trigger it manually using the following shortcut defined in my ~/.i3/config:
bindsym Ctrl+Mod1+l exec xautolock -locknow

Keyboard shortcuts While keyboard shortcuts can be configured in GNOME, they don't work within i3, so I added a few more bindings to my ~/.i3/config:
# volume control
bindsym XF86AudioLowerVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '-5%'
bindsym XF86AudioRaiseVolume exec /usr/bin/pactl set-sink-volume @DEFAULT_SINK@ -- '+5%'
# brightness control
bindsym XF86MonBrightnessDown exec xbacklight -steps 1 -time 0 -dec 5
bindsym XF86MonBrightnessUp exec xbacklight -steps 1 -time 0 -inc 5
bindsym XF86AudioMute exec /usr/bin/pactl set-sink-mute @DEFAULT_SINK@ toggle
# show battery stats
bindsym XF86Battery exec gnome-power-statistics
to make the volume control, screen brightness and battery status buttons work as expected on my laptop. These bindings require a few extra packages to be installed.

Keyboard layout switcher Another thing that used to work with GNOME and that I had to re-create in i3 is the ability to quickly toggle between two keyboard layouts using the keyboard. To make it work, I wrote a simple shell script and assigned a keyboard shortcut to it in ~/.i3/config:
bindsym $mod+u exec /home/francois/bin/toggle-xkbmap
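The toggle script itself isn't reproduced in the post, but a minimal sketch of such a layout switcher could look like this (the "us" and "fr" layouts are placeholders, not necessarily the ones actually used):
#!/bin/sh
# Hypothetical sketch of a two-layout toggle using setxkbmap.
current=$(setxkbmap -query | awk '/^layout/ {print $2}')
if [ "$current" = "us" ]; then
    setxkbmap fr
else
    setxkbmap us
fi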

Suspend script Since I run lots of things in the background, I have set my laptop to avoid suspending when the lid is closed by putting the following in /etc/systemd/logind.conf:
HandleLidSwitch=lock
Instead, when I want to suspend to ram, I use the following keyboard shortcut:
bindsym Ctrl+Mod1+s exec /home/francois/bin/s2ram
which executes a custom suspend script to clear the clipboards (using xsel), flush writes to disk and lock the screen before going to sleep. To avoid having to type my sudo password every time pm-suspend is invoked, I added the following line to /etc/sudoers:
francois  ALL=(ALL)  NOPASSWD:  /usr/sbin/pm-suspend
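The s2ram script itself isn't shown in the post, but based on the description above (clear the clipboards with xsel, flush writes to disk, lock the screen, then suspend), a minimal sketch could look like this:
#!/bin/sh
# Hypothetical sketch of the custom suspend helper described above.
xsel --clear --primary        # clear the primary X selection
xsel --clear --clipboard      # clear the clipboard
sync                          # flush pending writes to disk
gnome-screensaver-command --lock
sudo /usr/sbin/pm-suspend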

Window and workspace placement hacks While tiling window managers promise to manage windows for you so that you can focus on more important things, you will most likely want to customize window placement to fit your needs better.

Working around misbehaving applications A few applications make too many assumptions about window placement and are just plain broken in tiling mode. Here's how to automatically switch them to floating mode:
for_window [class="VidyoDesktop"] floating enable
You can get the Xorg class of the offending application by running this command:
xprop | grep WM_CLASS
before clicking on the window.

Keeping IM windows on the first workspace I run Pidgin on my first workspace and I have the following rule to keep any new window that pops up (e.g. in response to a new incoming message) on the same workspace:
assign [class="Pidgin"] 1

Automatically moving workspaces when docking Here's a neat configuration blurb which automatically moves my workspaces (and their contents) from the laptop screen (eDP1) to the external monitor (DP2) when I dock my laptop:
# bind workspaces to the right monitors
workspace 1 output DP2
workspace 2 output DP2
workspace 3 output DP2
workspace 4 output DP2
workspace 5 output DP2
workspace 6 output eDP1
You can get these output names by running:
xrandr --display :0 | grep " connected"
Finally, because X sometimes fails to detect my external monitor when docking/undocking, I also wrote a script to set the displays properly and bound it to the appropriate key on my laptop:
bindsym XF86Display exec /home/francois/bin/external-monitor
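That script isn't included either, but a minimal sketch using xrandr (reusing the eDP1/DP2 output names from above) might be:
#!/bin/sh
# Hypothetical sketch: prefer the external monitor when it is connected,
# otherwise fall back to the laptop panel.
if xrandr --display :0 | grep -q "^DP2 connected"; then
    xrandr --output DP2 --auto --primary --output eDP1 --off
else
    xrandr --output eDP1 --auto --primary --output DP2 --off
fi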

7 May 2014

Mario Lang: Planet bug: empty alt tags for hackergotchis

There is a strange bug in Planet Debian I am seeing since I joined. It is rather minor, but since it is an accessibility bug, I'd like to mention it here. I have written to the Planet Debian maintainers, and was told to figure it out myself. This is a pattern, accessibility is considered wishlist, apparently. And the affected people are supposed to fix it on their own. It is better if I don't say anything more about that attitude.
The Bug On Planet Debian, only some people have an alt tag for their hackergotchi, while all the configured entries look similar. There is no obvious difference in the configuration, but still, only some users here have a proper alt tag for their hackergotchi. Here is a list:
  • Dirk Eddelbuettel
  • Steve Kemp
  • Wouter Verhelst
  • Mehdi (noreply@blogger.com)
  • Andrew Pollock
  • DebConf Organizers
  • Francois Marier
  • The MirOS Project (tg@mirbsd.org)
  • Paul Tagliamonte
  • Lisandro Damián Nicanor Pérez Meyer (noreply@blogger.com)
  • Joey Hess
  • Chris Lamb
  • Mirco Bauer
  • Christine Spang
  • Guido Günther
These people/organisations currently displayed on Planet Debian have a proper alt tag for their hackergotchi. All the other members have none. In Lynx, it looks like the following:
hackergotchi for
And for those where it works, it looks like:
hackergotchi for Dirk Eddelbuettel
Strange, isn't it? If you have any idea why this might be happening, let me know, or even better, tell Planet Debian maintainers how to fix it. P.S.: Package planet-venus says it is a rewrite of Planet, and Planet can be found in Debian as well. I don't see it in unstable, maybe I am blind? Or has it been removed recently? If so, the package description of planet-venus is wrong.

4 May 2014

Francois Marier: What's in a debian/ directory?

If you're looking to get started at packaging free software for Debian, you should start with the excellent New Maintainers' Guide or the Introduction to Debian Packaging on the Debian wiki. Once you know the basics, or if you prefer to learn by example, you may be interested in the full walkthrough which follows. We will look at the contents of three simple packages.

node-libravatar This package is a node.js library for the Libravatar service. Version 2.0.0-3 of that package contains the following files in its debian/ directory:
  • changelog
  • compat
  • control
  • copyright
  • docs
  • node-libravatar.install
  • rules
  • source/format
  • watch

debian/control
Source: node-libravatar
Priority: extra
Maintainer: Francois Marier <francois@debian.org>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.4
Section: web
Homepage: https://github.com/fmarier/node-libravatar
Vcs-Git: git://git.debian.org/collab-maint/node-libravatar.git
Vcs-Browser: http://git.debian.org/?p=collab-maint/node-libravatar.git;a=summary
Package: node-libravatar
Architecture: all
Depends: ${shlibs:Depends}, ${misc:Depends}, nodejs
Description: libravatar library for NodeJS
 This library allows web application authors to make use of the free Libravatar
 service (https://www.libravatar.org). This service hosts avatar images for
 users and allows other sites to look them up using email addresses.
 .
 node-libravatar includes full support for federated avatar servers.
This is probably the most important file since it contains the bulk of the metadata about this package. Maintainer is a required field listing the maintainer of that package, which can be a person or a team. It only contains a single value though, any co-maintainers will be listed under the optional Uploaders field. Build-Depends lists the packages which are needed to build the package (e.g. a compiler), as opposed to those which are needed to install the binary package (e.g. a library it uses). Standards-Version refers to the version of the Debian Policy that this package complies with. The Homepage field refers to the upstream homepage, whereas the Vcs-* fields point to the repository where the packaging is stored. If you take a look at the node-libravatar packaging repository you will see that it contains three branches:
  • upstream is the source as it was in the tarball downloaded from upstream.
  • master is the upstream branch along with all of the Debian customizations.
  • pristine-tar is unrelated to the other two branches and is used by the pristine-tar tool to reconstitute the original upstream tarball as needed.
After these fields comes a new section which starts with a Package field. This is the definition of a binary package, not to be confused with the Source field at the top of this file, which refers to the name of the source package. In this particular example, they are both the same and there is only one of each, however this is not always the case, as we'll see later. Inside that binary package definition, lives the Architecture field which is normally one of these two:
  • all for a binary package that will work on all architectures but only needs to be built once
  • any for a binary package that will work everywhere but that will need to be built separately for each architecture
Finally, the last field worth pointing out is the Depends field which lists all of the runtime dependencies that the binary package has. This is what will be pulled in by apt-get when you apt-get install node-libravatar. The two variables will be substituted later by debhelper.
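Not part of the original walkthrough, but once the binary package has been built you can check what those substitution variables expanded to by querying the resulting .deb directly:
$ dpkg-deb --field ../node-libravatar_2.0.0-3_all.deb Depends
which prints the fully expanded Depends line from the package's control data.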

debian/changelog
node-libravatar (2.0.0-3) unstable; urgency=low
  * debian/watch: poll github directly
  * Bump Standards-Version up to 3.9.4
 -- Francois Marier <francois@debian.org>  Mon, 20 May 2013 12:07:49 +1200
node-libravatar (2.0.0-2) unstable; urgency=low
  * More precise license tag and upstream contact in debian/copyright
 -- Francois Marier <francois@debian.org>  Tue, 29 May 2012 22:51:03 +1200
node-libravatar (2.0.0-1) unstable; urgency=low
  * New upstream release
    - new non-backward-compatible API
 -- Francois Marier <francois@debian.org>  Mon, 07 May 2012 14:54:19 +1200
node-libravatar (1.1.1-1) unstable; urgency=low
  * Initial release (Closes: #661771)
 -- Francois Marier <francois@debian.org>  Fri, 02 Mar 2012 15:29:57 +1300
This may seem at first like a mundane file, but it is very important since it is the canonical source of the package version (2.0.0-3 in this case). This is the only place where you need to bump the package version when uploading a new package to the Debian archive. The first line also includes the distribution where the package will be uploaded. It is usually one of these values:
  • unstable for the vast majority of uploads
  • stable for uploads that have been approved by the release maintainers and fix serious bugs in the stable version of Debian
  • stable-security for security fixes to the stable version of Debian that cannot wait until the next stable point release and have been approved by the security team
Packages uploaded to unstable will migrate automatically to testing provided that a few conditions are met (e.g. no release-critical bugs were introduced). The length of time before that migration is influenced by the urgency field (low, medium or high) in the changelog entry. Another thing worth noting is that the first upload normally needs to close an ITP (Intent to Package) bug.
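The post doesn't mention it, but in practice a new changelog entry is rarely written by hand: the dch tool from the devscripts package takes care of the formatting and timestamps. For example, a hypothetical follow-up upload could be started with:
$ dch -v 2.0.0-4 "Brief description of the change"
(the 2.0.0-4 version number here is only an illustration of bumping the Debian revision).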

debian/rules
#!/usr/bin/make -f
# -*- makefile -*-
%:
    dh $@ 
override_dh_auto_test:
As can be gathered from the first two lines of this file, this is a Makefile. This is what controls how the package is built. There's not much to see and that's because most of its content is automatically added by debhelper. So let's look at it in action by building the package:
$ git buildpackage -us -uc
and then looking at parts of the build log (../node-libravatar_2.0.0-3_amd64.build):
 fakeroot debian/rules clean
dh clean 
   dh_testdir
   dh_auto_clean
   dh_clean
One of the first things we see is the debian/rules file being run with the clean target. To find out what that does, have a look at dh_auto_clean, which states that it will attempt to delete build residues and run something like make clean using the upstream Makefile.
 debian/rules build
dh build 
   dh_testdir
   dh_auto_configure
   dh_auto_build
Next we see the build target being invoked and looking at dh_auto_configure we see that this will essentially run ./configure and its equivalents. The dh_auto_build helper script then takes care of running make (or equivalent) on the upstream code. This should be familiar to anybody who has ever built a piece of free software from scratch and has encountered the usual method for building from source:
./configure
make
make install
Finally, we get to actually build the .deb:
 fakeroot debian/rules binary
dh binary 
   dh_testroot
   dh_prep
   dh_installdirs
   dh_auto_install
   dh_install
...
   dh_md5sums
   dh_builddeb
dpkg-deb: building package `node-libravatar' in `../node-libravatar_2.0.0-3_all.deb'.
Here we see a number of helpers, including dh_auto_install which takes care of running make install. Going back to debian/rules, we notice that there is a manually defined target at the bottom of the file:
override_dh_auto_test:
which essentially disables dh_auto_test by replacing it with an empty set of commands. The reason for this becomes clear when we take a look at the test target of the upstream Makefile and the dependencies it has: tap, a node.js library that is not yet available in Debian. In other words, we can't run the test suite on the build machines so we need to disable it here.

debian/compat
9
This file simply specifies the version of debhelper that is required by the various helpers used in debian/rules. Version 9 is the latest at the moment.

debian/copyright
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: node-libravatar
Upstream-Contact: Francois Marier <francois@libravatar.org>
Source: https://github.com/fmarier/node-libravatar
Files: *
Copyright: 2011 Francois Marier <francois@libravatar.org>
License: Expat
Files: debian/*
Copyright: 2012 Francois Marier <francois@debian.org>
License: Expat
License: Expat
 Permission is hereby granted, free of charge, to any person obtaining a copy of this
 software and associated documentation files (the "Software"), to deal in the Software
 without restriction, including without limitation the rights to use, copy, modify,
 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
 permit persons to whom the Software is furnished to do so, subject to the following
 conditions:
 .
 The above copyright notice and this permission notice shall be included in all copies
 or substantial portions of the Software.
 .
 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
 CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
 OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
This machine-readable file lists all of the different licenses encountered in this package. It requires that the maintainer audits the upstream code for any copyright statements that might be present in addition to the license of the package as a whole.

debian/docs
README.md
This file contains a list of upstream files that will be copied into the /usr/share/doc/node-libravatar/ directory by dh_installdocs.

debian/node-libravatar.install
lib/*    usr/lib/nodejs/
The install file is used by dh_install to supplement the work done by dh_auto_install which, as we have seen earlier, essentially just runs make install on the upstream Makefile. Looking at that upstream Makefile, it becomes clear that the files will need to be installed manually by the Debian package since that Makefile doesn't have an install target.

debian/watch
version=3
https://github.com/fmarier/node-libravatar/tags /fmarier/node-libravatar/archive/node-libravatar-([0-9.]+)\.tar\.gz
This is the file that allows Debian tools like the Package Tracking System to automatically detect that a new upstream version is available. What it does is simply visit the upstream page which contains all of the release tarballs and look for links which have an href matching the above regular expression. Running uscan --report --verbose will show us all of the tarballs that can be automatically discovered using this watch file:
-- Scanning for watchfiles in .
-- Found watchfile in ./debian
-- In debian/watch, processing watchfile line:
   https://github.com/fmarier/node-libravatar/tags /fmarier/node-libravatar/archive/node-libravatar-([0-9.]+)\.tar\.gz
-- Found the following matching hrefs:
     /fmarier/node-libravatar/archive/node-libravatar-2.0.0.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.1.1.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.1.0.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.0.1.tar.gz
     /fmarier/node-libravatar/archive/node-libravatar-1.0.0.tar.gz
Newest version on remote site is 2.0.0, local version is 2.0.0
 => Package is up to date
-- Scan finished

pylibravatar This second package is the equivalent Python library for the Libravatar service. Version 1.6-2 of that package contains similar files in its debian/ directory, but let's look at two in particular:
  • control
  • upstream/signing-key.asc

debian/control
Source: pylibravatar
Section: python
Priority: optional
Maintainer: Francois Marier <francois@debian.org>
Build-Depends: debhelper (>= 9), python-all, python3-all
Standards-Version: 3.9.5
Homepage: https://launchpad.net/pyLibravatar
...
Package: python-libravatar
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}, python-dns, python
Description: Libravatar module for Python 2
 Module to make use of the federated Libravatar.org avatar hosting service
 from within Python applications.
...
Package: python3-libravatar
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends}, python3-dns, python3
Description: Libravatar module for Python 3
 Module to make use of the federated Libravatar.org avatar hosting service
 from within Python applications.
...
Here is an example of a source package (pylibravatar) which builds two separate binary packages: python-libravatar and python3-libravatar. This highlights the fact that a given upstream source can be split into several binary packages in the archive when it makes sense. In this case, there is no point in Python 2 applications pulling in the Python 3 files, so the two separate packages make sense. Another common example is the use of a -doc package to separate the documentation from the rest of a package so that it doesn't need to be installed on production servers for example.

debian/upstream/signing-key.asc
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQINBEpQYz4BEAC7REQD1za69RUnkt6nRCFhSJmmoeJc+yEiWTKc9GOIMAwJDme1
+CMYgVn4Xzf1VQYwD/lE+mfWgyeMomLQjDM1mxx/LOM2a1WWPOk9+PvQwKfRJy92
...
UxDtZm/4yUmU6KvHvOGiDCMuIiB+MqhqJJ5wf80wXhzu8nmC+fyGt6nvu0ggMle8
sAMgXt/aQUTZE5zNCQ==
=RkTO
-----END PGP PUBLIC KEY BLOCK-----
This is simply the OpenPGP key that the upstream developer uses to sign release tarballs. Since PGP signatures are available on the upstream download page, it's possible to instruct uscan to check signatures before downloading tarballs. The way to do that is to use the pgpsigurlmangle option in debian/watch:
version=3
opts=pgpsigurlmangle=s/$/.asc/ https://pypi.python.org/pypi/pyLibravatar https://pypi.python.org/packages/source/p/pyLibravatar/pyLibravatar-(.*)\.tar\.gz
which is simply a regular expression replacement string which takes the tarball URL and converts it to the URL of the matching PGP signature.
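To see what that mangle does concretely, you can apply the same substitution by hand; here it is run on the 1.6 tarball URL as an illustration:
$ echo "https://pypi.python.org/packages/source/p/pyLibravatar/pyLibravatar-1.6.tar.gz" | sed 's/$/.asc/'
https://pypi.python.org/packages/source/p/pyLibravatar/pyLibravatar-1.6.tar.gz.asc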

fcheck The last package we will look at is a file integrity checker. It essentially goes through all of the files in /usr/bin/ and /usr/lib/ and stores a hash of them in its database. When one of these files changes, you get an email. In particular, we will look at the following files in the debian/ directory of version 2.7.59-18:
  • dirs
  • fcheck.cron.d
  • fcheck.postrm
  • fcheck.postinst
  • patches/
  • README.Debian
  • rules
  • source/format

debian/patches This directory contains ten patches as well as a file called series which lists the patches that should be applied to the upstream source and in which order. Should you need to temporarily disable a patch, simply remove it from this file and it will no longer be applied at build time. Let's have a look at patches/04_cfg_sha256.patch:
Description: Switch to sha256 hash algorithm
Forwarded: not needed
Author: Francois Marier <francois@debian.org>
Last-Update: 2009-03-15
--- a/fcheck.cfg
+++ b/fcheck.cfg
@@ -149,8 +149,7 @@ TimeZone        = EST5EDT
 #$Signature      = /usr/bin/sum
 #$Signature      = /usr/bin/cksum
 #$Signature      = /usr/bin/md5sum
-$Signature      = /bin/cksum
-
+$Signature      = /usr/bin/sha256sum
 # Include an optional configuration file.
This is a very simple patch which changes the default configuration of fcheck to promote the use of a stronger hash function. At the top of the file is a bunch of metadata in the DEP-3 format. Why does this package contain so many customizations to the upstream code when Debian's policy is to push fixes upstream and work towards reducing the delta between upstream and Debian's code? The answer can be found in debian/control:
Homepage: http://web.archive.org/web/20050415074059/www.geocities.com/fcheck2000/
This package no longer has an upstream maintainer and its original source is gone. In other words, the Debian package is where all of the new bug fixes get done.

debian/source/format
3.0 (quilt)
This file contains what is called the source package format. What it basically says is that the patches found in debian/patches/ will be applied to the upstream source using the quilt tool at build time.
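Not covered in the post, but when hacking on such a package it can be handy to apply and unapply the patch stack manually with quilt, using the Debian convention for the patches directory:
export QUILT_PATCHES=debian/patches
quilt push -a    # apply every patch listed in debian/patches/series
quilt pop -a     # unapply them all again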

debian/fcheck.postrm
#!/bin/sh
# postrm script for fcheck
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#        * <postrm> `remove'
#        * <postrm> `purge'
#        * <old-postrm> `upgrade' <new-version>
#        * <new-postrm> `failed-upgrade' <old-version>
#        * <new-postrm> `abort-install'
#        * <new-postrm> `abort-install' <old-version>
#        * <new-postrm> `abort-upgrade' <old-version>
#        * <disappearer's-postrm> `disappear' <overwriter>
#          <overwriter-version>
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
case "$1" in
    remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
    ;;
    purge)
      if [ -e /var/lib/fcheck/fcheck.dbf ]; then
        echo "Purging old database file ..."
        rm -f /var/lib/fcheck/fcheck.dbf
      fi
      rm -rf /var/lib/fcheck
      rm -rf /var/log/fcheck
      rm -rf /etc/fcheck
    ;;
    *)
        echo "postrm called with unknown argument \ $1'" >&2
        exit 1
    ;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0
This script is one of the many possible maintainer scripts that a package can provide if needed. This particular one, as the name suggests, will be run after the package is removed (apt-get remove fcheck) or purged (apt-get remove --purge fcheck). Looking at the case statement above, it doesn't do anything extra in the remove case, but it deletes a few files and directories when called with the purge argument.

debian/README.Debian This optional README file contains Debian-specific instructions that might be useful to users. It supplements the upstream README which is often more generic and cannot assume a particular system configuration.

debian/rules
#!/usr/bin/make -f
# -*- makefile -*-
# Sample debian/rules that uses debhelper.
# This file was originally written by Joey Hess and Craig Small.
# As a special exception, when this file is copied by dh-make into a
# dh-make output file, you may use that output file without restriction.
# This special exception was added by Craig Small in version 0.37 of dh-make.
# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1
build-arch:
build-indep:
build: build-stamp
build-stamp:
    dh_testdir
    pod2man --section=8 $(CURDIR)/debian/fcheck.pod > $(CURDIR)/fcheck.8
    touch build-stamp
clean:
    dh_testdir
    dh_testroot
    rm -f build-stamp 
    rm -f $(CURDIR)/fcheck.8
    dh_clean
install: build
    dh_testdir
    dh_testroot
    dh_prep
    dh_installdirs
    cp $(CURDIR)/fcheck $(CURDIR)/debian/fcheck/usr/sbin/fcheck
    cp $(CURDIR)/fcheck.cfg $(CURDIR)/debian/fcheck/etc/fcheck/fcheck.cfg
# Build architecture-independent files here.
binary-arch: build install
# Build architecture-independent files here.
binary-indep: build install
    dh_testdir
    dh_testroot
    dh_installdocs
    dh_installcron
    dh_installman fcheck.8
    dh_installchangelogs
    dh_installexamples
    dh_installlogcheck
    dh_link
    dh_strip
    dh_compress
    dh_fixperms
    dh_installdeb
    dh_shlibdeps
    dh_gencontrol
    dh_md5sums
    dh_builddeb
binary: binary-indep binary-arch
.PHONY: build clean binary-indep binary-arch binary install
This is an example of an old-style debian/rules file which you still encounter in packages that haven't yet been upgraded to the latest version of debhelper (version 9), as can be seen from the contents of debian/compat:
8
It does essentially the same thing as what we've seen in the build log, but in a more verbose way.

debian/dirs
usr/sbin
etc/fcheck
This file contains a list of directories that dh_installdirs will create in the build directory. The reason why these directories need to be created is that files are copied into these directories in the install target of the debian/rules file. Note that this is different from directories which are created at the time of installation of the package. In that case, the directory (e.g. /var/log/fcheck/) must be created in the postinst script and removed in the postrm script.
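The matching postinst isn't reproduced in the article, but a minimal sketch of that pattern (hypothetical; the real debian/fcheck.postinst may differ) would be:
#!/bin/sh
set -e
case "$1" in
    configure)
        # create the runtime directory the package needs at install time
        mkdir -p /var/log/fcheck
    ;;
esac
#DEBHELPER#
exit 0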

debian/fcheck.cron.d
#
# Regular cron job for the fcheck package
#
30 */2  * * *   root    test -x /usr/sbin/fcheck && if ! nice ionice -c3 /usr/sbin/fcheck -asxrf /etc/fcheck/fcheck.cfg >/var/run/fcheck.out 2>&1; then mailx -s "ALERT: [fcheck] `hostname --fqdn`" root </var/run/fcheck.out ; /usr/sbin/fcheck -cadsxlf /etc/fcheck/fcheck.cfg ; fi ; rm -f /var/run/fcheck.out
This file is the cronjob which drives the checks performed by this package. It will be copied to /etc/cron.d/fcheck by dh_installcron.

1 March 2014

Francois Marier: Using vnc to do remote tech support over high-latency networks

If you ever find yourself doing a bit of technical support for relatives over the phone, there's nothing like actually seeing what they are doing on their computer. One of the best tools for such remote desktop sharing is vnc. Here's the best setup I have come up with so far. If you have any suggestions, please leave a comment!

Basic vnc configuration First off, you need two things: a vnc server on your relative's machine and a vnc client on yours. Thanks to vnc being an open protocol, there are many choices for both. I eventually settled on x11vnc for the server and ssvnc for the client. They are both available in the standard Debian and Ubuntu repositories. Since I have ssh access on the machine that needs to run the server, I simply log in and then run x11vnc. Here's what ~/.x11vncrc contains:
noxdamage
That option appears to be necessary when the desktop to share is running gnome-shell / compiz. Afterwards, I start the client on my laptop with the following command:
ssvncviewer -encodings zrle -scale 1280x775 localhost
The scaling factor is simply the resolution of the client minus any window decorations.

ssh configuration As you can see above, the client is not connecting directly to the server. Instead it's connecting to its own vnc port (localhost:5900). That's because I'm tunnelling the traffic through the ssh connection in order to avoid relying on vnc extensions for authentication and encryption. Here's what the client's ~/.ssh/config needs for that simple use case:
Host server.example.com
  LocalForward 5900 127.0.0.1:5900
If the remote host (which has an internal IP address of 192.168.1.2 in this example) is not connected directly to the outside world and instead goes through a gateway, then your ~/.ssh/config will look like this:
Host gateway.example.com
  ForwardAgent yes
  LocalForward 5900 192.168.1.2:5900
Host server.example.com
  ProxyCommand ssh -q -a gateway.example.com nc -q0 %h 22
and the remote host will need to open up a port on its firewall for the gateway (internal IP address of 192.168.1.1 here):
iptables -A INPUT -p tcp --dport 5900 -s 192.168.1.1/32 -j ACCEPT

Optimizing for high-latency networks Since I do most of my tech support over a very high latency network, I tweaked the default vnc settings to reduce the amount of network traffic. I added this to ~/.x11vncrc on the vnc server:
ncache 10
ncache_cr
and changed the client command line to this:
ssvncviewer -compresslevel 9 -quality 3 -bgr233 -encodings zrle -use64 -scale 1280x775 -ycrop 1024 localhost
This decreases image quality (and required bandwidth) and enables client-side caching. The magic 1024 number is simply the full vertical resolution of the remote machine, which sports a vintage 1280x1024 LCD monitor.

20 February 2014

Francois Marier: Hardening ssh Servers

Basic configuration There are a few basic things that most admins will already know (and that tiger will warn you about if you forget):
  • only allow version 2 of the protocol
  • disable root logins
  • disable password authentication
This is what /etc/ssh/sshd_config should contain:
Protocol 2
PasswordAuthentication no
PermitRootLogin no

Whitelist approach to giving users ssh access To ensure that only a few users have ssh access to the server and that newly created users don't have it enabled by default, create a new group:
addgroup sshuser
and then add the relevant users to it:
adduser francois sshuser
Finally, add this to /etc/ssh/sshd_config:
AllowGroups sshuser
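One additional precaution, not mentioned in the post: after editing /etc/ssh/sshd_config, it's worth checking the syntax before reloading the daemon so you don't lock yourself out of the server:
/usr/sbin/sshd -t && service ssh reload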

Deterring brute-force (or dictionary) attacks One way to ban attackers who try to brute-force your ssh server is to install the fail2ban package. It keeps an eye on the ssh log file (/var/log/auth.log) and temporarily blocks IP addresses after a number of failed login attempts. Another approach is to hide the ssh service using Single-Packet Authentication. I have fwknop installed on some of my servers and use small wrapper scripts to connect to them.

Using restricted shells For those users who only need an ssh account on the server in order to transfer files (using scp or rsync), it's a good idea to set their shell (via chsh) to a restricted one like rssh. Should they attempt to log into the server, these users will be greeted with the following error message:
This account is restricted by rssh.
Allowed commands: rsync 
If you believe this is in error, please contact your system administrator.
Connection to server.example.com closed.

Restricting authorized keys to certain IP addresses In addition to listing all of the public keys that are allowed to log into a user account, the ~/.ssh/authorized_keys file also allows (as the man page points out) a user to impose a number of restrictions. Perhaps the most useful option is from, which allows a user to restrict the IP addresses which can log in using a specific key. Here's what one of my authorized_keys looks like:
from="192.0.2.2" ssh-rsa AAAAB3Nz...zvCn bot@example
You may also want to include the following options to each entry: no-X11-forwarding, no-user-rc, no-pty, no-agent-forwarding and no-port-forwarding.
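Combining those restrictions with the from option, the same hypothetical entry would look like this:
from="192.0.2.2",no-X11-forwarding,no-user-rc,no-pty,no-agent-forwarding,no-port-forwarding ssh-rsa AAAAB3Nz...zvCn bot@example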

Increasing the amount of logging The first thing I'd recommend is to increase the level of verbosity in /etc/ssh/sshd_config:
LogLevel VERBOSE
which will, amongst other things, log the fingerprints of keys used to login:
sshd: Connection from 192.0.2.2 port 39671
sshd: Found matching RSA key: de:ad:be:ef:ca:fe
sshd: Postponed publickey for francois from 192.0.2.2 port 39671 ssh2 [preauth]
sshd: Accepted publickey for francois from 192.0.2.2 port 39671 ssh2 
Secondly, if you run logcheck and would like to whitelist the "Accepted publickey" messages on your server, you'll have to start by deleting the first line of /etc/logcheck/ignore.d.server/sshd. Then you can add an entry for all of the usernames and IP addresses that you expect to see. Finally, it is also possible to log all commands issued by a specific user over ssh by enabling the pam_tty_audit module in /etc/pam.d/sshd:
session required pam_tty_audit.so enable=francois
However this module is not included in wheezy and has only recently been re-added to Debian.
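Going back to the logcheck whitelist mentioned above, an ignore rule added to a local rules file could look something like this (a hypothetical pattern; adjust the username and IP address to match what you expect to see):
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ sshd\[[0-9]+\]: Accepted publickey for francois from 192\.0\.2\.2 port [0-9]+ ssh2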

Identifying stolen keys One thing I'd love to have is a way to identify a stolen public key. Given the IP restrictions described above, if a public key is stolen and used from a different IP, I will see something like this in /var/log/auth.log:
sshd: Connection from 198.51.100.10 port 39492
sshd: Authentication tried for francois with correct key but not from a permitted host (host=198.51.100.10, ip=198.51.100.10).
sshd: Failed publickey for francois from 198.51.100.10 port 39492 ssh2
sshd: Connection closed by 198.51.100.10 [preauth]
So I can get the IP address of the attacker (likely to be a random VPS or a Tor exit node), but unfortunately, the key fingerprints don't appear for failed connections like they do for successful ones. So I don't know which key to revoke. Is there any way to identify which key was used in a failed login attempt or is the solution to only ever have a single public key in each authorized_keys file and create a separate user account for each user?

2 January 2014

Francois Marier: Running your own XMPP server on Debian or Ubuntu

In order to get closer to my goal of reducing my dependence on centralized services, I decided to setup my own XMPP / Jabber server on a Linode VPS running Debian wheezy. I chose ejabberd since it was recommended by the RTC Quick Start website and here's how I put everything together.

DNS and SSL My personal domain is fmarier.org and so I created the following DNS records:
jabber-gw            CNAME    fmarier.org.
_xmpp-client._tcp    SRV      5 0 5222 jabber-gw.fmarier.org.
_xmpp-server._tcp    SRV      5 0 5269 jabber-gw.fmarier.org.
Then I went to get a free XMPP SSL certificate for jabber-gw.fmarier.org from StartSSL. This is how I generated the CSR (Certificate Signing Request) on a high-entropy machine:
openssl req -new -newkey rsa:2048 -nodes -out ssl.csr -keyout ssl.key -subj "/C=NZ/CN=jabber-gw.fmarier.org"
I downloaded the signed certificate as well as the StartSSL intermediate certificate and combined them this way:
cat ssl.crt ssl.key sub.class1.server.ca.pem > ejabberd.pem

ejabberd installation Installing ejabberd on Debian is pretty simple and I mostly followed the steps on the Ubuntu wiki with an additional customization to solve the Pidgin "Not authorized" connection problems.
  1. Install the package, using "admin" as the username for the administrative user:
    apt-get install ejabberd
    
  2. Set the following in /etc/ejabberd/ejabberd.cfg (don't forget the trailing dots!):
     {acl, admin, {user, "admin", "fmarier.org"}}.
     {hosts, ["fmarier.org"]}.
     {fqdn, "jabber-gw.fmarier.org"}.
    
  3. Copy the SSL certificate into the /etc/ejabberd/ directory and set the permissions correctly:
    chown root:ejabberd /etc/ejabberd/ejabberd.pem
    chmod 640 /etc/ejabberd/ejabberd.pem
    
  4. Restart the ejabberd daemon:
    /etc/init.d/ejabberd restart
    
  5. Create a new user account for yourself:
    ejabberdctl register me fmarier.org P@ssw0rd1!
    
  6. Open up the following ports on the server's firewall:
    iptables -A INPUT -p tcp --dport 5222 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5269 -j ACCEPT
    

Client setup On the client side, if you use Pidgin, create a new account with the following settings in the "Basic" tab:
  • Protocol: XMPP
  • Username: me
  • Domain: fmarier.org
  • Password: P@ssw0rd1!
and the following setting in the "Advanced" tab:
  • Connection security: Require encryption
From this, I was able to connect to the server without clicking through any certificate warnings. If you want to make sure that XMPP federation works, add your GMail address as a buddy to the account and send yourself a test message. In this example, the XMPP address I give to my friends is me@fmarier.org.

22 December 2013

Francois Marier: Creating a Linode-based VPN setup using OpenVPN on Debian or Ubuntu

Using a Virtual Private Network is a good way to work around geoIP restrictions but also to protect your network traffic when travelling with your laptop and connecting to untrusted networks. While you might want to use Tor for the part of your network activity where you prefer to be anonymous, a VPN is a faster way to connect to sites that already know you. Here are my instructions for setting up OpenVPN on Debian / Ubuntu machines where the VPN server is located on a cheap Linode virtual private server. They are largely based on the instructions found on the Debian wiki. An easier way to set up an ad-hoc VPN is to use sshuttle but for some reason, it doesn't seem to work on Linode or Rackspace virtual servers.

Generating the keys Make sure you run the following on a machine with good entropy and not a VM! I personally use a machine fitted with an Entropy Key. The first step is to install the required package:
sudo apt-get install openvpn
Then, copy the easy-rsa example files into your home directory (no need to run any of this as root):
mkdir easy-rsa
cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ easy-rsa/
cd easy-rsa/2.0
and put something like this in your ~/easy-rsa/2.0/vars:
export KEY_SIZE=2048
export KEY_COUNTRY="NZ"
export KEY_PROVINCE="AKL"
export KEY_CITY="Auckland"
export KEY_ORG="fmarier.org"
export KEY_EMAIL="francois@fmarier.org"
export KEY_CN=hafnarfjordur.fmarier.org
export KEY_NAME=hafnarfjordur.fmarier.org
export KEY_OU=VPN
Create this symbolic link:
ln -s openssl-1.0.0.cnf openssl.cnf
and generate the keys:
. ./vars
./clean-all
./build-ca
./build-key-server server  # press ENTER at every prompt, no password
./build-key akranes  # "akranes" as Name, no password
./build-dh
/usr/sbin/openvpn --genkey --secret keys/ta.key

Configuring the server On my server, a Linode VPS called hafnarfjordur.fmarier.org, I installed the openvpn package:
apt-get install openvpn
and then copied the following files from my high-entropy machine:
cp ca.crt dh2048.pem server.key server.crt ta.key /etc/openvpn/
chown root:root /etc/openvpn/*
chmod 600 /etc/openvpn/ta.key /etc/openvpn/server.key
Then I took the official configuration template:
cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz /etc/openvpn/
gunzip /etc/openvpn/server.conf.gz
and set the following in /etc/openvpn/server.conf:
dh dh2048.pem
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 74.207.241.5"
push "dhcp-option DNS 74.207.242.5"
tls-auth ta.key 0
cipher AES-128-CBC
user nobody
group nogroup
(These DNS servers are the ones I found in /etc/resolv.conf on my Linode VPS.) Finally, I added the following to these configuration files:
  • /etc/sysctl.conf:
    net.ipv4.ip_forward=1
    
  • /etc/rc.local (just before exit 0):
    iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
    
  • /etc/default/openvpn:
    AUTOSTART="all"
    
and ran sysctl -p before starting OpenVPN:
/etc/init.d/openvpn start
If the server has a firewall, you'll need to open up this port:
iptables -A INPUT -p udp --dport 1194 -j ACCEPT

Configuring the client The final piece of this solution is to setup my laptop, akranes, to connect to hafnarfjordur by installing the relevant Network Manager plugin:
apt-get install network-manager-openvpn-gnome
The laptop needs these files from the high-entropy machine:
cp ca.crt akranes.crt akranes.key ta.key /etc/openvpn/
chown root:francois /etc/openvpn/akranes.key /etc/openvpn/ta.key
chmod 640 /etc/openvpn/ta.key /etc/openvpn/akranes.key
and my own user needs to have read access to the secret keys. To create a new VPN, right-click on Network-Manager and add a new VPN connection of type "OpenVPN":
  • Gateway: hafnarfjordur.fmarier.org
  • Type: Certificates (TLS)
  • User Certificate: /etc/openvpn/akranes.crt
  • CA Certificate: /etc/openvpn/ca.crt
  • Private Key: /etc/openvpn/akranes.key
  • Available to all users: NO
then click the "Avanced" button and set the following:
  • General
    • Use LZO data compression: YES
  • Security
    • Cipher: AES-128-CBC
    • HMAC Authentication: Default
  • TLS Authentication
    • Subject Match: server
    • Verify peer (server) certificate usage signature: YES
    • Remote peer certificate TLS type: Server
    • Use additional TLS authentication: YES
    • Key File: /etc/openvpn/ta.key
    • Key Direction: 1
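For reference, here is roughly what those Network Manager settings translate to as a plain OpenVPN client configuration file, for anyone not using the GUI. This is a hedged sketch built from standard OpenVPN directives; the file name and exact option set are assumptions, not part of the original setup:
# /etc/openvpn/akranes.conf (hypothetical client config)
client
dev tun
proto udp
remote hafnarfjordur.fmarier.org 1194
nobind
persist-key
persist-tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/akranes.crt
key /etc/openvpn/akranes.key
tls-auth /etc/openvpn/ta.key 1
remote-cert-tls server
cipher AES-128-CBC
comp-lzo
verb 3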

Debugging If you run into problems, simply take a look at the logs while attempting to connect to the server:
tail -f /var/log/syslog
on both the server and the client. In my experience, searching for the error messages you find in there is usually enough to solve the problem.

Next steps The next thing I'm going to add to this VPN setup is a local unbound DNS resolver that will be offered to all clients. Is there anything else you have in your setup and that I should consider adding to mine?

19 November 2013

Francois Marier: Things that work well with Tor

Tor is a proxy server which allows its users to hide their IP address from the websites they connect to. In order to provide this level of anonymity however, it introduces latency into these connections, an unfortunate performance-privacy trade-off which means that few users choose to do all of their browsing through Tor. Here are a few things that I have found work quite well through Tor. If there are any other interesting use cases I've missed, please leave a comment!

Tor setup There are already great docs on how to install and configure the Tor server and the only thing I would add is that I've found that having a Polipo proxy around is quite useful for those applications that support HTTP proxies but not SOCKS proxies. On Debian, it's just a matter of installing the polipo package and then configuring it as it used to be recommended by the Tor project.

RSS feeds The whole idea behind RSS feeds is that articles are downloaded in batch ahead of time. In other words, latency doesn't matter. I use akregator to read blogs and the way to make it fetch articles over Tor is to change the KDE-wide proxy server using systemsettings and setting a manual proxy of localhost on port 8008 (i.e. the local instance of Polipo). Similarly, I use podget to automatically fetch podcasts through this cron job in /etc/cron.d/podget-francois:
0 12 * * 1-5 francois   http_proxy=http://localhost:8008/ https_proxy=http://localhost:8008/ nocache nice ionice -n7 /usr/bin/podget -s
Prior to that, I was using hpodder and had the following in ~/.hpodder/curlrc:
proxy=socks4a://localhost:9050

GnuPG For those of us using the GNU Privacy Guard to exchange encrypted emails, keeping our public keyring up to date is important since it's the only way to ensure that revoked keys are taken into account. The script I use for this runs once a day and has the unfortunate side effect of revealing the contents of my address book to the keyserver I use. Therefore, I figured that I should at least hide my IP address by putting the following in ~/.gnupg/gpg.conf:
keyserver-options http-proxy=http://127.0.0.1:8008
However, that tends to make key submission fail and so I created a key submission alias in my ~/.bashrc which avoids sending keys through Tor:
alias gpgsendkeys='gpg --send-keys --keyserver-options http-proxy=""'

Instant messaging Communication via XMPP is another use case that's not affected much by a bit of extra latency. To get Pidgin to talk to an XMPP server over Tor, simply open "Tools | Preferences" and set a SOCKS5 (not Tor/Privacy) proxy of localhost on port 9050.

GMail Finally, I found that since I am running GMail in a separate browser profile, I can take advantage of GMail's excellent caching and preloading and run the whole thing over Tor by setting that entire browser profile to run its traffic through the Tor SOCKS proxy on port 9050.
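The post doesn't say how that browser profile is pointed at Tor; one way to do it, assuming a Firefox profile, is to drop a user.js with the SOCKS proxy preferences into the profile directory:
// user.js for the dedicated profile (hypothetical; the same settings can
// also be made in the browser's connection preferences dialog)
user_pref("network.proxy.type", 1);
user_pref("network.proxy.socks", "127.0.0.1");
user_pref("network.proxy.socks_port", 9050);
user_pref("network.proxy.socks_remote_dns", true);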

25 October 2013

Enrico Zini: A vision wanted

A vision wanted Today Richard Stallman mailed all Italian LUGs asking that tomorrow's LinuxDay be called "GNU/Linux Day" instead. I wonder how that is ever going to help a community so balkanised that the only way Italian LUGs manage to do something together is to say "let's not care what we all do, let's just do it on the same day and call it a national event". Of course a few LUGs still make a point of not doing anything on that day, because you know, Judean People's Front. Cawk.

Today a friend asked me if I could help her support people in installing Citrix Whatsit to set up a video conference to network meetings that will take place in a month in different cities. Those meetings are something I look forward to. It wasn't much of a problem to say "no, I can't do that"; it was a problem to be unable to come up with some viable, Free alternatives. I sometimes have to use Skype to talk with friends who also are Debian Developers, because I still haven't managed to make VoIP work unless I go through a commercial proxy.

There was the happy news that our regional administration is switching from MS Word to OpenOffice. It soon became a flamewar, because some people loudly complained that they should have used LibreOffice instead. At DebConf, after spending an hour getting frustrated with the default formatting of bullet points in WhateverOffice Impress, I did my Debian Contributors talk using a text file in vim. And it was a success! Thanks Francois Marier for maintaining cowsay.

I can't sync contact lists and appointments between my N900, which runs a Debian derivative, and my laptop, because I don't want to have a Google account, and nothing else would work out of the box. I don't even know how to keep a shared calendar with other DDs, without using a 3rd party cloud service that I don't want to trust with my life's personal schedule.

I need to do a code review of every vim plugin I need to use, because you can only get them by cloning GitHub repositories over plain http, and they don't even provide signed tags. That code is run with my own privileges every time I start a text editor, which is, like, all the time. I'm frightened at the idea of how many people blissfully don't think about what that means. Vim users. Programmers. Cool chaps.

Yet the most important thing in Debian today seems to be yet another crusade between upstart and systemd. But we haven't had a lengthy discussion on why, although the excellent OpenStreetMap exists and many of us contribute to it, it seems to still be more immediate to hit Google Maps to get a route computed. How can we change that? We haven't had a lengthy discussion on what we can offer to allow anyone to set up some social platform that won't get swamped with spam the first day and cracked open the second; that would allow people to share some photos with their friends only, and some with the rest of the world; that won't require a full-time paid person to maintain. That won't be obsolete and require a migration to a new platform in a year. That isn't Facebook or Google Plus.

I stopped taking photos because it's too much work to show them to people. Other people use Instagram. Whatever the hipster trend is for photo sharing today, October 25, 2013, I'm pretty sure it's not a Free platform.

But we can do something. We technology leaders. We are those who drive technological change! For example, today I invested two hours of hard effort trying to figure out why libbuffy's test suite fails on kfreebsd.
All while wondering why I was doing that, since I know all buffy's users personally, and none of them uses kfreebsd. And I will take a day off work to study the library symbols file specification, so that next time I'll know right away if the new version of a C++ compiler decides that a template-generated symbol isn't worth adding to a library anymore.

What is this effort really about? It sometimes feels like micromanaging to me. It's good to have excellent quality standards. But not without a vision. Not until "reliable network printing with all PDF viewers and print servers we ship" is among our release goals. Not until we commit to making sure that "sharing files between Debian users" will work out of the box, without the need of going through a 3rd party website, or email.

I'm not interested in spending energy discussing init systems. I'm interested in spending energy sharing stories of what cool stuff we can do in Debian today, out of the box. And what cool stuff we'll be able to do tomorrow. Let's spend time on IRC, on mailing lists, and at the next Debian events, talking about why we are really into this. Talking about a vision!

Note: Please don't spend time telling me how to fix the problems I mentioned above. I'm not interested in help fixing some problems for me today. I'm interested in asking for help fixing problems for everybody, right in the next stable release. Remember, remember, the 5th of November, 2014.

15 October 2013

Francois Marier: The Perils of RAID and Full Disk Encryption on Ubuntu 12.04

I've been using disk encryption (via LUKS and cryptsetup) on Debian and Ubuntu for quite some time and it has worked well for me. However, while setting up full disk encryption for a new computer on a RAID1 partition, I discovered that there are a few major problems with RAID on Ubuntu.

My Setup: RAID and LUKS Since I was setting up a new machine on Ubuntu 12.04 LTS (Precise Pangolin), I used the alternate CD (I burned ubuntu-12.04.3-alternate-amd64+mac.iso to a blank DVD) to get access to the full disk encryption options. First, I created a RAID1 array to mirror the data on the two hard disks. Then, I used the partition manager built into the installer to setup an unencrypted boot partition (/dev/md0 mounted as /boot) and an encrypted root partition (/dev/md1 mounted as /) on the RAID1 array. While I had done full disk encryption and mirrored drives before, I had never done them at the same time on Ubuntu or Debian.

The problem: cannot boot an encrypted degraded RAID After setting up the RAID, I decided to test it by booting from each drive with the other one unplugged. The first step was to ensure that the system is configured (via dpkg-reconfigure mdadm) to boot in "degraded mode". When I rebooted with a single disk though, I received an "evms_activate is not available" error message instead of the usual cryptsetup password prompt. The exact problem I ran into is best described in this comment (see this bug for context). It turns out that booting degraded RAID arrays has been plagued with several problems.

My solution: an extra initramfs boot script to start the RAID array The underlying problem is that the RAID1 array is not started automatically when it's missing a disk and so cryptsetup cannot find the UUID of the drive to decrypt (as configured in /etc/crypttab). My fix, based on a script I was lucky enough to stumble on, lives in /etc/initramfs-tools/scripts/local-top/cryptraid:
#!/bin/sh
PREREQ="mdadm"
prereqs()
{
     echo "$PREREQ"
}
case $1 in
prereqs)
     prereqs
     exit 0
     ;;
esac
cat /proc/mdstat
mdadm --run /dev/md1
cat /proc/mdstat
After creating that file, remember to:
  1. make the script executable (using chmod a+x) and
  2. regenerate the initramfs (using dpkg-reconfigure linux-image-KERNELVERSION).
To make sure that the script is doing the right thing:
  1. press "Shift" while booting to bring up the Grub menu
  2. then press "e" to edit the default boot line
  3. remove the "quiet" and "splash" options from the kernel arguments
  4. press F10 to boot with maximum console output
You should see the RAID array stopped (look for the output of the first cat /proc/mdstat call) and then you should see output from a running degraded RAID array.

Backing up the old initramfs If you want to be extra safe while testing this new initramfs, make sure you only reconfigure one kernel at a time (no update-initramfs -u -k all) and make a copy of the initramfs before you reconfigure the kernel:
cp /boot/initrd.img-KERNELVERSION-generic /boot/initrd.img-KERNELVERSION-generic.original
Then if you run into problems, you can go into the Grub menu, edit the default boot option and make it load the .original initramfs.

18 September 2013

Francois Marier: Presenting from a separate user account

While I suspect that professional speakers have separate presentation laptops that they use only to give talks, I don't do this often enough to justify the hassle and cost of a separate machine. However, I do use a separate user account to present from. It allows me to focus on my presentation and not stress out about running into configuration problems or exposing private information. But mostly, I think it's about removing anything that could be distracting for the audience. The great thing about having a separate user for this is that you can do whatever you want in your normal account and still know that the other account is ready to go and configured for presenting on a big screen.

Basics The user account I use when giving talks is called presenter and it has the same password as my main user account, just to keep things simple. However, it doesn't need to belong to any of the UNIX groups that my main user account belongs to. In terms of configuration, it looks like this:
  • power management and screen saver are turned off
  • all sound effects are turned off
  • window manager set to the default one (as opposed to a weird tiling one)
  • desktop notifications are turned off
Of course, this user account only has the software I need while presenting. You won't find an instant messaging, IRC or Twitter client running on there. The desktop only has the icons I need for the presentation: slides and backup videos (in case the network is down and/or prevents me from doing a live demo).

Web browsers While I usually have my presentations in PDF format (for maximum compatibility, you never know when you'll have to borrow someone else's laptop), I use web browsers (different ones to show that my demos work with all of them) all the time for demos. Each browser:
  • clears everything (cookies, history, cache, etc.) at the end of the session
  • has the site I want to demo as the homepage
  • only contains add-ons or extensions I need for the demos
  • has the minimal number of toolbars and doesn't have any bookmarks
  • has search suggestions turned off
  • never asks to remember passwords

Terminal and editors Some of my demos may feature coding demos and running scripts, which is why I have my terminal and editor set to:
  • a large font size
  • a color scheme with lots of contrast
It's all about making sure that the audience can see everything and follow along easily. Also, if you have the sl package installed system-wide, you'll probably want to put the following in your ~/.bashrc:
alias sl="ls"
alias LS="ls"

Rehearsal It's very important to rehearse the whole presentation using this account to make sure that you have everything you need and that you are comfortable with the configuration (for example, the large font size). If you have access to a projector or a digital TV, try connecting your laptop to it. This will ensure that you know how to change the resolution of the external monitor and to turn mirroring ON and OFF ahead of time (especially important if you never use the default window manager or desktop environment). I keep a shortcut to the display settings in the sidebar.
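For example, mirroring can be toggled from the command line with xrandr (the output names below are hypothetical and vary between machines):
xrandr --output VGA1 --mode 1024x768 --same-as LVDS1   # mirror onto the projector
xrandr --output VGA1 --off                             # turn mirroring off again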

Localization Another thing I like to do is to set my operating system and browser locales to the one where I am giving a talk, assuming that it is a western language I can understand to some extent. It probably doesn't make a big difference, but I think it's a nice touch and a few people have commented on this in the past. My theory is that it might be less distracting to the audience if they are shown the browser UI and menus they see every day. I'd love to have other people's thoughts on this point though. Also, pay attention to the timezone since it could be another source of distraction as audience members try to guess what timezone your computer is set to.

Anything else? If you also use a separate account for your presentations, I'd be curious to know whether there are other things you've customised. If there's anything I missed, please leave a comment!

8 August 2013

Francois Marier: Server Migration Plan

I recently had to migrate the main Libravatar server to a new virtual machine. In order to minimize risk and downtime, I decided to write a migration plan ahead of time. I am sharing this plan here in case it gives any ideas to others who have to go through a similar process.

Prepare DNS
  • Change the TTL on the DNS entry for libravatar.org to 3600 seconds.
  • Remove the mirrors I don't control from the DNS load balancer (cdn and seccdn).
  • Remove the main server from cdn and seccdn in DNS.
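Lowering the TTL ahead of time means that clients will pick up the new address within an hour of the switch. To double-check what resolvers are actually seeing, something like this can be used (dig is in the dnsutils package):
dig +noall +answer libravatar.org    # the second column is the remaining TTL in seconds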

Preparing the new server
  • Setup the new server.
  • Copy the database from the old site and restore it.
  • Copy /var/lib/libravatar from the old site.
  • Hack my local /etc/hosts file to point to the new server's IP address:
    xxx.xxx.xxx.xxx www.libravatar.org stats.libravatar.org cdn.libravatar.org
    
  • Test all functionality on the new site.

Preparing the old server
  • Prepare a static "under migration" Apache config in /etc/apache2/sites-enabled.static/:
    <VirtualHost *:80>
        RewriteEngine On
        RewriteRule ^ https://www.libravatar.org [redirect=301,last]
    </VirtualHost>
    <VirtualHost *:443>
        SSLEngine on
        SSLProtocol TLSv1
        SSLHonorCipherOrder On
        SSLCipherSuite RC4-SHA:HIGH:!kEDH
        SSLCertificateFile /etc/libravatar/www.crt
        SSLCertificateKeyFile /etc/libravatar/www.pem
        SSLCertificateChainFile /etc/libravatar/www-chain.pem
        RewriteEngine On
        RewriteRule ^ /var/www/migration.html [last]
        <Directory /var/www>
            Allow from all
            Options -Indexes
        </Directory>
    </VirtualHost>
    
  • Put this static file in /var/www/migration.html:
    <html>
    <body>
    <p>We are migrating to a new server. See you soon!</p>
    <p>- <a href="http://identi.ca/libravatar">@libravatar</a></p>
    </body>
    </html>
    
  • Enable the rewrite module:
    a2enmod rewrite
    
  • Prepare an Apache config proxying to the new server in /etc/apache2/sites-enabled.proxy/:
    <VirtualHost *:80>
        RewriteEngine On
        RewriteRule ^ https://www.libravatar.org [redirect=301,last]
    </VirtualHost>
    <VirtualHost *:443>
        SSLEngine on
        SSLProtocol TLSv1
        SSLHonorCipherOrder On
        SSLCipherSuite RC4-SHA:HIGH:!kEDH
        SSLCertificateFile /etc/libravatar/www.crt
        SSLCertificateKeyFile /etc/libravatar/www.pem
        SSLCertificateChainFile /etc/libravatar/www-chain.pem
        SSLProxyEngine on
        ProxyPass / https://www.libravatar.org/
        ProxyPassReverse / https://www.libravatar.org/
    </VirtualHost>
    
  • Enable the proxy-related modules for Apache:
    a2enmod proxy
    a2enmod proxy_connect
    a2enmod proxy_http
    

Migrating servers
  • Tweet and dent about the upcoming migration.
  • Enable the static file config on the old server (disabling the Django app).
  • Copy the database from the old server and restore it on the new server.
  • Copy /var/lib/libravatar from the old server to the new one.
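There are many ways to do these two copy steps; here is a rough sketch assuming a PostgreSQL backend and SSH access between the machines (database name and hostnames are made up for illustration):
# on the old server: dump the database and push it to the new machine
sudo -u postgres pg_dump libravatar | gzip > /tmp/libravatar.sql.gz
scp /tmp/libravatar.sql.gz newserver:/tmp/
# on the new server: restore the dump and pull the avatar data across
zcat /tmp/libravatar.sql.gz | sudo -u postgres psql libravatar
rsync -aH --delete oldserver:/var/lib/libravatar/ /var/lib/libravatar/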

Disable mirror sync
  • Log into each mirror and comment out the sync cron jobs in /etc/cron.d/libravatar-slave.
  • Make sure mirrors are no longer able to connect to the old server by deleting /var/lib/libravatar/master/.ssh/authorized_keys on the old server.

Testing the main site
  • Hack my local /etc/hosts file to point to the new server's IP address:
    xxx.xxx.xxx.xxx www.libravatar.org stats.libravatar.org cdn.libravatar.org
    
  • Test all functionality on the new site.
  • If testing is successful, update DNS to point to the new server with a short TTL (in case we need to revert).
  • Enable the proxy config on the old server.
  • Hack my local /etc/hosts file to point to the old server's IP address.
  • Test basic functionality going through the proxy.
  • Remove local /etc/hosts hacks.

Re-enable mirror sync
  • Build a new libravatar-slave package with an updated known_hosts file for the new server (one way to fetch the new host key is sketched after this list).
  • Log into each server I control and update that package.
  • Test the connection to the master (hacking /etc/hosts on the mirror if needed):
    sudo -u libravatar-slave ssh libravatar-master@0.cdn.libravatar.org
    
  • Uncomment the sync cron jobs in /etc/cron.d/libravatar-slave.
  • An hour later, make sure that new images are copied over and that the TLS certs are still working.
  • Remove /etc/hosts hacks from all mirrors.
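For the known_hosts update in the first step, one way to fetch the new server's host key is with ssh-keyscan (the hostname and key type here are assumptions; verify the fingerprint out-of-band before shipping it in the package):
ssh-keyscan -t rsa 0.cdn.libravatar.org >> known_hosts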

Post migration steps
  • Tweet and dent about the fact that the migration was successful.
  • Send a test email to the support address included in the tweet/dent.
  • Take a backup of config files and data on the old server in case I forgot to copy something to the new one.
  • Get in touch with mirror owners to tell them to update libravatar-slave package and test ssh configuration.
  • Add third-party controlled mirrors back to the DNS load-balancer once they are up to date.
  • A few days later, change the TTL for the main site back to 43200 seconds.
  • A week later, kill the proxy on the old server by shutting it down.

4 August 2013

Francois Marier: Debugging Gearman configuration

Gearman is a queuing system that has been in Debian for a long time and is quite reliable. However, I ran into problems when upgrading a server from Debian squeeze to wheezy. Here's how I debugged my Gearman setup.

Log verbosity First, I increased the verbosity level of the daemon by adding --verbose=INFO to /etc/default/gearman-job-server (the possible values of the verbose option are in the libgearman documentation) and restarting the daemon:
/etc/init.d/gearman-job-server restart
I opened a second terminal to keep an eye on the logs:
tail -f /var/log/gearman-job-server/gearman.log

Listing available workers Next, I registered a very simple worker:
gearman -w -f mytest cat
and made sure it was connected properly by telnetting into the Gearman process:
telnet localhost 4730
and listing all currently connected workers using the workers command (one of the commands available in the Gearman TEXT protocol). There should be an entry similar to this one:
30 127.0.0.1 - : mytest
Because there are no exit or quit commands in the TEXT protocol, you need to terminate the telnet connection like this:
  1. Press Return
  2. Press Ctrl + ]
  3. Press Return
  4. Type quit at the telnet prompt and press Return.
Finally, I sent some input to the simple worker I setup earlier:
echo "hi there" | gearman -f mytest
and got my input repeated on the terminal:
hi there

Gearman bug I traced my problems down to this error message when I sent input to the worker:
gearman: gearman_client_run_tasks : connect_poll(Connection refused)
getsockopt() failed -> libgearman/connection.cc:104
It turns out that it is a known bug that was fixed upstream but still affects Debian wheezy and some versions of Ubuntu. The bug report is pretty unhelpful since the work-around is hidden away in the comments of this "invalid" answer: be explicit about the hostname and port number in both gearman calls. So I was able to make it work like this:
gearman -w -h 127.0.0.1 -p 4730 -f mytest cat
echo "hi there" | gearman -h 127.0.0.1 -p 4730 -f mytest
where the hostname matches exactly what's in /etc/default/gearman-job-server.

28 July 2013

Francois Marier: FISL for foreigners HOWTO

FISL (pronounced FIZZ lay by the locals) is a large Free and Open Source software gathering in Porto Alegre, Brazil. While the primary audience of the conference is the Latin American "libre software" community, a number of overseas speakers also participate. This is my contribution to them: a short guide for international guests visiting the Fórum Internacional do Software Livre.

Planning Before you fly out, make sure you look up the visa requirements for your country of citizenship. Citizens of many western countries will require a visa and will need to visit the local Brazilian embassy ahead of time to get one. Next, have a look at the list of recommended immunizations. As with most destinations, it is recommended that your routine immunizations be up to date, but there are also other specialized ones such as Yellow Fever that are recommended by the Brazilian government. You should therefore visit a travel clinic a few weeks ahead of time. Other than that, I suggest reading up on the country and keeping an eye on the various travel advisories.

Arrival You will be flying into the Porto Alegre airport. If you need to exchange overseas money for Brazilian reais, you can do that there. You'll probably also want to pick up a power adapter at the airport if you intend to charge your laptop while you're in the country :) Brazil has both 127V and 220V outlets using Type N sockets. Privacy note: using the free airport wifi will require giving your passport details as part of the registration process.

Language If you don't speak Portuguese, expect a few challenges since most of the people you'll meet (including taxi drivers, many airport workers and some hotel staff) won't speak English. I highly recommend getting a phrase book before you leave and printing paper maps of where you are planning to go (to show to taxi drivers when you get lost). Native Spanish speakers seem to get by speaking Spanish to Portuguese speakers and understanding enough Portuguese to hold a conversation. I wouldn't count on it unless your Spanish is quite good though. Also, the official conference blog posts eventually get translated into English, but there is a delay, so you may want to subscribe to the Portuguese feed and use Google Translate to keep up with FISL news before you get there.

The conference FISL is a large conference and it has a very "decentralized" feel to it. From the outside, it looks like it's organized by an army of volunteers where everyone is taking care of some portion of it without a whole lot of top-down direction. It seems to work quite well! What this means for you as a foreign speaker, however, is that you're unlikely to be provided with a lot of information or help finding your way around the conference (i.e. no "speaker liaison"). There is a separate registration desk for speakers, but that's about all of the attention you'll receive before you deliver your talk. So make sure you know where to go and show up in your assigned room early to speak with the person introducing you. If your talk is in English, it will be live-translated by an interpreter. It's therefore a good idea to speak a bit more slowly and to pause a bit more. Other than that, the organizers make an effort to schedule an English talk in each timeslot so non-Portuguese speakers should still be able to get a lot out of the conference. FISL was a lot of fun for me and I hope that some of these tips will help you enjoy the biggest FLOSS gathering in the southern hemisphere!

18 May 2013

Francois Marier: Three wrappers to run commands without impacting the rest of the system

Most UNIX users have heard of the nice utility used to run a command with a lower priority to make sure that it only runs when nothing more important is trying to get a hold of the CPU:
nice long_running_script.sh
That only deals with part of the problem though, because the CPU is not the only contended resource. A low-priority command could still interfere with other tasks by stealing valuable I/O cycles (e.g. accessing the hard drive).

Prioritizing I/O Another Linux command, ionice, allows users to set the I/O priority of a process to be lower than that of all other processes. Here's how to make sure that a script doesn't get to do any I/O unless the resource it wants to use is idle:
sudo ionice -c3 hammer_disk.sh
The above only works as root, but the following is a pretty good approximation that works for non-root users as well:
ionice -n7 hammer_disk.sh
You may think that running a command with both nice and ionice would have absolutely no impact on other tasks running on the same machine, but there is one more aspect to consider, at least on machines with limited memory: the disk cache.

Polluting the disk cache If you run a command (for example, a program that goes through the entire file system checking various things), you will find that the kernel will start pulling more files into its cache and expunging cache entries used by other processes. This can have a very significant impact on a system as useful portions of memory are swapped out. For example, on my laptop, the nightly debsums, rkhunter and tiger cron jobs essentially clear my disk cache of useful entries and force the system to slowly page everything back into memory as I unlock my screen saver in the morning. Thankfully, there is now a solution for this in Debian: the nocache package. This is what my long-running cron jobs now look like:
nocache ionice -c3 nice long_running.sh

Turning off disk syncs Another relatively unknown tool, which I would certainly not recommend for all cron jobs but is nevertheless related to I/O, is eatmydata. If you wrap it around a command, it will run without bothering to periodically make sure that it flushes any changes to disk. This can speed things up significantly but it should obviously not be used for anything that has important side effects or that cannot be re-run in case of failure. After all, its name is very appropriate. It will eat your data!
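As an illustration of where the trade-off makes sense, wrapping a throwaway, easily re-run job in eatmydata (on top of the other wrappers) looks like this; the script name is made up:
eatmydata nice rebuild_test_database.sh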

1 April 2013

Francois Marier: Poor man's RAID1 between an SSD and a hard drive

After moving from a hard drive to an SSD on my work laptop, I decided to keep the hard drive spinning and use it as a backup for the SSD. With the following setup, I can pull the SSD out of my laptop and it should still boot up normally with all of my data on the hard drive.

Manually setting up an encrypted root partition Before setting up the synchronization between the two drives, I had to replicate the partition setup. I used fdisk, cfdisk and gparted to create the following partitions:
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048      499711      248832   83  Linux
/dev/sdb2          501760   500117503   249807872    5  Extended
/dev/sdb5          503808   500117503   249806848   83  Linux
and then loosely followed these instructions to create an encrypted root partition on /dev/sdb5:
$ cryptsetup luksFormat /dev/sdb5
$ cryptsetup luksOpen /dev/sdb5 sdb5_crypt
$ pvcreate /dev/mapper/sdb5_crypt
$ vgcreate akranes2 /dev/mapper/sdb5_crypt
$ vgchange -a y akranes2
$ lvcreate -L247329718272B -nroot akranes2
$ lvcreate -L8468299776B -nswap_1 akranes2
$ mkfs.ext4 /dev/akranes2/root
Finally, I added the new encrypted partition to the list of drives to bring up at boot time by looking up its UUID:
$ cryptsetup luksUUID /dev/sdb5
and creating a new entry for it in /etc/crypttab.
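The crypttab entry itself looks something like this, with the placeholder UUID replaced by the output of luksUUID above:
sdb5_crypt UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none luks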

Copying the boot partition Setting up the boot partition was much easier because it's not encrypted. All that was needed was to format it and then copy the files over:
$ mkfs.ext2 /dev/sdb1
$ mount /dev/sdb1 /mnt/boot
$ cp -a /boot/* /mnt/boot/
The only other thing to remember is to install grub in the boot sector of that drive. On modern Debian systems, that's usually just a matter of running dpkg-reconfigure grub-pc and adding the second drive (/dev/sdb in my case) to the list of drives to install grub on.
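If you'd rather not go through debconf, grub can also be installed onto the second drive directly:
grub-install /dev/sdb
update-grub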

Sync scripts To keep the contents of the SSD and the hard drive in sync, I set up a regular rsync of the root and boot partitions using the following mount points (as defined in /etc/fstab):
/dev/mapper/akranes-root /           ext4    noatime,discard,errors=remount-ro 0       1
/dev/mapper/akranes2-root /mnt/root  ext4    noatime,errors=remount-ro         0       2
UUID=0b9109d0-... /boot              ext2    defaults                          0       2
UUID=6e6f05fb-... /mnt/boot          ext2    defaults                          0       2
I use this script (/usr/local/sbin/ssd_boot_backup) for syncing the boot partition:
#!/bin/sh
if [ ! -e /mnt/boot/hdd.mounted ] ; then
    echo "The rotating hard drive is not mounted in /mnt/boot."
    exit 1
fi
if [ ! -e /boot/ssd.mounted ] ; then
    echo "The ssd is not the boot partition"
    exit 1
fi
nice ionice -c3 rsync -aHx --delete --exclude=/ssd.mounted --exclude=/lost+found/* /boot/* /mnt/boot/
and a similar one (/usr/local/sbin/ssd_root_backup) for the root partition:
#!/bin/sh
if [ ! -e /mnt/root/hdd.mounted ] ; then
    echo "The rotating hard drive is not mounted in /mnt/root."
    exit 1
fi
if [ ! -e /ssd.mounted ] ; then
    echo "The ssd is not the root partition"
    exit 1
fi
nice ionice -c3 rsync -aHx --delete --exclude=/dev/* --exclude=/proc/* --exclude=/sys/* --exclude=/tmp/* --exclude=/boot/* --exclude=/mnt/* --exclude=/lost+found/* --exclude=/media/* --exclude=/var/tmp/* --exclude=/ssd.mounted --exclude=/var/lib/lightdm/.gvfs --exclude=/home/francois/.gvfs /* /mnt/root/
To ensure that each drive is properly mounted before the scripts run, I created empty ssd.mounted files in the root directory of each of the partitions on the SSD, and empty hdd.mounted files in the root directory of the hard drive partitions.
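Creating those marker files is just a matter of touching them once each partition is mounted in its usual location:
touch /boot/ssd.mounted /ssd.mounted                 # on the SSD partitions
touch /mnt/boot/hdd.mounted /mnt/root/hdd.mounted    # on the hard drive partitions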

Cron jobs The sync scripts are run every couple of hours through this crontab:
10 */4 * * *                root    /usr/local/sbin/ssd_boot_backup
20 0,4,8,12,16,20 * * *     root    /usr/local/sbin/ssd_root_backup
20 2,6,10,14,18,22 * * *    root    /usr/bin/on_ac_power && /usr/local/sbin/ssd_root_backup
which includes a reduced frequency while running on battery to avoid spinning the hard drive up too much.

22 February 2013

Francois Marier: Doing a fresh Debian/Ubuntu install without having to reconfigure everything

Taking advantage of a new hard drive, I decided to reinstall my Ubuntu (Precise 12.04.2) laptop from scratch so that I could easily enable full-disk encryption (a requirement for Mozilla work laptops). Reinstalling and reconfiguring everything takes a bit of time though, so here's the procedure I followed to keep the configuration to a minimum.

Install Ubuntu/Debian on the new drive While full-disk encryption is built into the graphical installer as of Ubuntu 12.10, it's only available through the alternate install CD in 12.04 and earlier. Using that CD, install Ubuntu on the new drive, making sure you select the "Encrypted LVM" option when you get to the partitioning step. (The procedure is the same if you use a Debian CD.) To make things easy, the first user you create should match exactly the one from your previous installation.

Copy your home directory Once the OS is installed on the new drive, plug the old one back in (in an external enclosure if you need to) and mount its root partition read-only on /mnt. Then log in as root (not just using sudo, actually log in as the root user) and copy your home directory onto the new drive:
rm -rf /home/*
cp -a /mnt/home/* /home/
Then you should be able to log in as your regular user.

Reinstall all packages The next step is to reinstall all of the packages you had installed on the old OS. But first of all, let's avoid having to answer all of the debconf questions we've already answered in the past:
rm -rf /var/cache/debconf
cp -a /mnt/var/cache/debconf /var/cache/
and set the debconf priority to the one you usually use (medium in my case):
dpkg-reconfigure debconf
Next, make sure you have access to all of the necessary repositories:
cp -a /mnt/etc/apt/sources.list /etc/apt/
cp -a /mnt/etc/apt/sources.list.d/* /etc/apt/sources.list.d/
apt-get update

Getting and setting the list of installed packages To get a list of the packages that were installed on the old drive, use dpkg on the old install:
mount -o remount,rw /mnt
chroot /mnt
dpkg --get-selections > /packages
exit
mount -o remount,ro -r /mnt
Use that list on the new install to reinstall everything:
dpkg --set-selections < /mnt/packages
apt-get dselect-upgrade

Selectively copy configuration files over Finally, once all packages are installed, you can selectively copy the config files from the old drive (in /mnt/etc) to the new (/etc/). In particular, make sure you include these ones:
cp -a /mnt/etc/alternatives/ /mnt/etc/default/ /etc/
(I chose not to just overwrite all config files with the old ones because I wanted to get rid of any cruft that had accumulated there and suspected that there might be some slight differences due to the fresh install of the distro.)

Reboot Once that's done, you should really give that box a restart to ensure that all services are using the right config files, but otherwise, that's it. Your personal data and config are back, all of your packages are installed and configured the way they were, and everything is fully encrypted!

16 January 2013

Francois Marier: Moving from Blogger to Ikiwiki and Branchable

In order to move my blog to a free-as-in-freedom platform and support the great work that Joey (of git-annex fame) and Lars (of GTD for hackers fame) have put into their service, I decided to convert my Blogger blog to Ikiwiki and host it on Branchable. While the Ikiwiki tips page points to some old instructions, they weren't particularly useful to me. Here are the steps I followed.

Exporting posts and comments from Blogger Thanks to Google letting people export their own data from their services, I was able to get a full dump (posts, comments and metadata) of my blog in Atom format. To do this, go into "Settings → Other" then look under "Blog tools" for the "Export blog" link.

Converting HTML posts to Markdown Converting posts from HTML to Markdown involved a few steps:
  1. Converting the post content using a small conversion library to which I added a few hacks.
  2. Creating the file hierarchy that ikiwiki requires.
  3. Downloading images from Blogger and fixing their paths in the article text.
  4. Extracting comments and linking them to the right posts.
The Python script I wrote to do all of the above will hopefully be a good starting point for anybody wanting to migrate to Ikiwiki.
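My script leaned on that conversion library for the HTML-to-Markdown step, but as a rough illustration of what that step involves, pandoc can do a first pass on a single post (the filenames are made up):
pandoc -f html -t markdown old-post.html -o posts/old-post/index.mdwn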

Maintaining old URLs In order to make sure I wouldn't break any existing links pointing to my blog on Blogger, I got the above Python script to output a list of Apache redirect rules and then found out that I could simply email these rules to Joey and Lars to get them added to my blog. My rules look like this:
# Tagged feeds
Redirect permanent /feeds/posts/default/-/debian http://feeding.cloud.geek.nz/tags/debian/index.rss
Redirect permanent /search/label/debian http://feeding.cloud.geek.nz/tags/debian
# Main feed (needs to come after the tagged feeds)
Redirect permanent /feeds/posts/default http://feeding.cloud.geek.nz/index.rss
# Articles
Redirect permanent /2012/12/keeping-gmail-in-separate-browser.html http://feeding.cloud.geek.nz/posts/keeping-gmail-in-separate-browser/
Redirect permanent /2012/11/prefetching-resources-to-prime-browser.html http://feeding.cloud.geek.nz/posts/prefetching-resources-to-prime-browser/

Collecting analytics Since I am no longer using Google Analytics on my blog, I decided to take advantage of the access log download feature that Joey recently added to Branchable. Every night, I download my blog's access log and then process it using awstats. Here is the cron job I use:
#!/bin/bash
BASEDIR=/home/francois/documents/branchable-logs
LOGDIR=/var/log/feedingthecloud
# Download the current access log
LANG=C LC_PAPER= ssh -oIdentityFile=$BASEDIR/branchable-logbot b-feedingthecloud@feedingthecloud.branchable.com logdump > $LOGDIR/access.log
It uses a separate SSH key I added through the Branchable control panel and outputs to a file that gets overwritten every day. Next, I installed the awstats Debian package, and configured it like this:
$ cat /etc/awstats/awstats.conf.local
SiteDomain=feedingthecloud.branchable.com
LogType=W
LogFormat=1
LogFile="/var/log/feedingthecloud/access.log"
Even if you're not interested in analytics, I recommend you keep an eye on the 404 errors for a little while after the move. This has helped me catch a critical redirection I had forgotten.

Limiting Planet feeds One of the most common things to happen right after someone migrates to a new blogging platform is flooding any aggregator that subscribes to their blog, usually because the post identifiers change. Unsurprisingly, Ikiwiki already had a few ways to avoid this problem. I chose to simply modify each tagged feed and limit them to the posts added after the move to Branchable.

Switching DNS Having always hosted my blog on a domain I own, all I needed to do to move over to the new platform without an outage was to change my CNAME to point to feedingthecloud.branchable.com. I've kept the Blogger blog alive and listening on feeding.cloud.geek.nz to ensure that clients using a broken DNS resolver (which caches records for longer than requested via the record's TTL) continue to see the old posts.

20 December 2012

Francois Marier: Keeping GMail in a separate browser profile

I wanted to be able to use the GMail web interface on my work machine, but for privacy reasons, I prefer not to be logged into my Google Account on my main browser. Here's how I make use of a somewhat hidden Firefox feature to move GMail to a separate browser profile.

Creating a separate profile The idea behind browser profiles is simple: each profile has separate history, settings, bookmarks, cookies, etc. To create a new one, simply start Firefox with this option:
firefox -ProfileManager
to display a dialog which allows you to create new profiles.
Once you've created a new "GMail" profile, you can start it up from the profile manager or directly from the command-line:
firefox -no-remote -P GMail
(The -no-remote option ensures that a new browser process is created for it.) To make this easier, I put the command above in a tiny gmail shell script that lives in my ~/bin/ directory. I can use it to start my "GMail browser" by simply typing gmail.

Tuning privacy settings for 2-step authentication While I initially kept that browser profile in private browsing mode, this was forcing me to enter my 2-factor authentication credentials every time I started the browser. So to avoid having to use Google Authenticator (or its Firefox OS cousin) every day, I ended up switching to custom privacy settings and enabling all cookies.
It turns out however that there is a Firefox extension which can selectively delete unwanted cookies while keeping useful ones. Once that add-on is installed and the browser restarted, simply add accounts.google.com to the whitelist and set it to clear cookies when the browser is closed.
Then log into GMail and tick the "Trust this computer" checkbox at the 2-factor prompt.
With these settings, your browsing history will be cleared and you will be logged out of GMail every time you close your browser but will still be able to skip the 2-factor step on that device.

18 September 2012

Francois Marier: Advice to newcomers and veterans of the Free and Open Source software communities

A few months ago, a collection of essays called Open Advice was released. It consists of the things that seasoned Free and Open Source developers wish they had known when they got started. The LWN book review is the best way to get a good feel for what it's about, but here are the notes I took while reading it:
